    HIRIS Performance Study

    In this report, the remote sensing system simulation is used to study a proposed sensor concept. An overview of the instrument and its parameters is presented, along with the model of the instrument as implemented in the simulation. Signal-to-noise levels of the instrument under a variety of system configurations are presented and discussed. Classification performance under these varying configurations is also shown, along with the relationships among signal-to-noise ratio, feature selection, and classification performance.
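
    The abstract does not reproduce the instrument model itself, but the signal-to-noise behavior it studies is commonly summarized with a detector-level SNR expression. The sketch below is a generic, illustrative formulation (signal shot noise, dark-current shot noise, and read noise combined in quadrature); the function and parameter names are assumptions, not the report's implementation.

```python
import numpy as np

def detector_snr(signal_electrons, dark_electrons, read_noise_electrons):
    """Illustrative per-band SNR for a photon-counting detector: the signal
    divided by the root-sum-square of signal shot noise, dark-current shot
    noise, and read noise (all expressed in electrons)."""
    total_noise = np.sqrt(signal_electrons + dark_electrons
                          + read_noise_electrons ** 2)
    return signal_electrons / total_noise
```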

    An Examination of Enhanced Atmospheric Methane Detection Methods for Predicting Performance of a Novel Multiband Uncooled Radiometer Imager

    To evaluate the potential for a new uncooled infrared radiometer imager to detect enhanced atmospheric levels of methane, three different analysis methods were examined. A single-pixel brightness temperature to noise-equivalent delta temperature (NEdT) comparison study, performed using data simulated with MODTRAN6, revealed that a single thermal band centered on the 7.68 µm methane feature yields a brightness temperature difference exceeding the sensor noise level for a plume of about 17 ppm at ambient atmospheric temperature, compared to an ambient scene with no enhanced methane present. Application of a normalized differential methane index method, a novel approach for methane detection, demonstrated how a simple two-band method can be used to detect a methane plume that is 10 ppm above ambient atmospheric concentration and 10 K below ambient atmospheric temperature with an 80% hit rate and a 17% false alarm rate. This method detected methane with a level of success similar to that of the third method, a proven multichannel approach, the matched filter, which was applied with six spectral channels. Results from these examinations suggest that, given a high enough concentration and temperature contrast, a multispectral system with a single band allocated to a methane absorption feature can detect enhanced levels of methane.
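
    The exact band arithmetic behind the normalized differential methane index is not given in the abstract, but two-band normalized difference indices conventionally take the difference of the two measurements over their sum. The sketch below illustrates that form using brightness temperatures from a window band and the 7.68 µm methane band; the band roles, function names, and thresholding step are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def normalized_difference_index(bt_window, bt_methane):
    """Two-band normalized difference of brightness temperatures (K):
    bt_window  - brightness temperature in a nearby atmospheric window band
    bt_methane - brightness temperature in the 7.68 um methane absorption band
    A plume colder than ambient absorbs in the methane band, shifting the
    index relative to plume-free pixels."""
    bt_window = np.asarray(bt_window, dtype=float)
    bt_methane = np.asarray(bt_methane, dtype=float)
    return (bt_window - bt_methane) / (bt_window + bt_methane)

def flag_enhanced_methane(index_image, background_index, threshold):
    """Flag pixels whose index departs from a plume-free background estimate
    by more than a chosen threshold (tuned to trade hit rate against false
    alarm rate)."""
    return np.abs(index_image - background_index) > threshold
```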

    Spectral Quality Metrics for Terrain Classification

    Current image quality approaches are designed to assess the utility of single-band images by trained image analysts. While analysts today are certainly involved in the exploitation of spectral imagery, automated tools are generally used as aids in the analysis and offer hope of significantly reducing the analysis timeline and analyst workload in the future. Thus, there is a recognized need for spectral image quality metrics that include the effects of automated algorithms. Previously, we reported on candidate approaches for spectral quality metrics in the context of unresolved object detection. We have continued these efforts through empirical trade studies in the context of ground cover terrain classification. HYDICE airborne hyperspectral imagery has been analyzed for the effects of spatial resolution, signal-to-noise ratio, and number of spectral channels on scene classification accuracy. Various classification algorithms, including Gaussian maximum likelihood, spectral angle mapper, and Euclidean minimum distance, have been considered. Performance metrics included classification accuracy, confusion matrices, and the Kappa coefficient. An extension of the previously developed Spectral Quality Equation (SQE) has been developed for the terrain classification application. As expected, the accuracy of terrain classification shows only modest sensitivity to the parameters considered, except in the extreme cases of high noise, few bands, and coarse ground resolution. However, these results are useful in continuing to develop the quantitative relationships necessary for characterizing the quality of spectral imagery in various applications.
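
    Two of the algorithms and metrics named above have standard textbook forms that may help in reading the results: the spectral angle mapper score and the Kappa coefficient computed from a confusion matrix. The sketch below shows those standard forms; it is not the code used in the study.

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Spectral angle mapper: angle (radians) between a pixel spectrum and a
    class reference spectrum; the class with the smallest angle is chosen."""
    cos_theta = np.dot(pixel, reference) / (
        np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

def kappa_coefficient(confusion):
    """Cohen's Kappa from a square confusion matrix (rows = reference labels,
    columns = predicted labels): agreement beyond that expected by chance."""
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    observed = np.trace(confusion) / n
    expected = np.dot(confusion.sum(axis=0), confusion.sum(axis=1)) / n ** 2
    return (observed - expected) / (1.0 - expected)
```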

    RSSIM: A Simulation Program for Optical Remote Sensing Systems

    RSSIM is a comprehensive simulation tool for the study of multispectral remotely sensed images and associated system parameters. It has been developed to allow the creation of realistic multispectral images based on detailed models of the surface, the atmosphere, and the sensor. It can also be used to study the effect of system parameters on an output measure, such as classification accuracy or class separability. This report describes the operation and use of RSSIM. The first section discusses the implementation of the program, followed by examples of its use. Section 2 discusses the structure and algorithms used in the major subroutines, along with the associated parameter files. Section 3 provides a complete listing of the program code.
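
    RSSIM's actual subroutines are listed in the report itself, not in this abstract, so the sketch below only illustrates the kind of surface-atmosphere-sensor chain such a simulation evaluates per band: a Lambertian ground-reflected solar term attenuated by the atmosphere plus path radiance, band-averaged against a sensor response with additive noise. The names and simplified radiometry are assumptions for illustration, not RSSIM's code.

```python
import numpy as np

def at_sensor_radiance(reflectance, solar_irradiance, cos_solar_zenith,
                       transmittance, path_radiance):
    """Simplified spectral radiance reaching the sensor from a Lambertian
    surface: atmosphere-attenuated ground-reflected sunlight plus scattered
    path radiance (adjacency and thermal terms omitted)."""
    surface_term = reflectance * solar_irradiance * cos_solar_zenith / np.pi
    return transmittance * surface_term + path_radiance

def band_signal(radiance, spectral_response, gain, noise_sigma, rng=None):
    """Response-weighted band-average radiance, scaled by a radiometric gain,
    with zero-mean Gaussian sensor noise added."""
    rng = np.random.default_rng() if rng is None else rng
    band_radiance = np.sum(radiance * spectral_response) / np.sum(spectral_response)
    return gain * band_radiance + rng.normal(0.0, noise_sigma)
```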

    Modeling, Simulation, and Analysis of Optical Remote Sensing Systems

    Remote sensing of the Earth's resources from space-based sensors has evolved in the past twenty years from a scientific experiment to a commonly used technological tool. The scientific applications and engineering aspects of remote sensing systems have been studied extensively. However, most of these studies have been aimed at understanding individual aspects of the remote sensing process, while relatively few have studied their interrelations. A motivation for studying these interrelationships has arisen with the advent of highly sophisticated configurable sensors as part of the Earth Observing System (EOS) proposed by NASA for the 1990s. These instruments represent a tremendous advance in sensor technology, with data gathered in nearly 200 spectral bands and with the ability for scientists to specify many observational parameters. It will be increasingly necessary for users of remote sensing systems to understand the tradeoffs and interrelationships of system parameters. In this report, two approaches to investigating remote sensing systems are developed. In one approach, detailed models of the scene, the sensor, and the processing aspects of the system are implemented in a discrete simulation. This approach is useful in creating simulated images with desired characteristics for use in sensor or processing algorithm development. A less complete but computationally simpler method based on a parametric model of the system is also developed. In this analytical model, the various informational classes are parameterized by their spectral mean vector and covariance matrix. These class statistics are modified by models for the atmosphere, the sensor, and processing algorithms, and an estimate is made of the resulting classification accuracy among the informational classes. Application of these models is made to the study of the proposed High Resolution Imaging Spectrometer (HIRIS).

    The interrelationships among observational conditions, sensor effects, and processing choices are investigated with several interesting results. Reduced classification accuracy in hazy atmospheres is seen to be due not only to sensor noise, but also to the increased path radiance scattered from the surface. The effect of the atmosphere is also seen in its relationship to view angle. In clear atmospheres, increasing the zenith view angle is seen to result in an increase in classification accuracy due to the reduced scene variation as the ground size of image pixels is increased. However, in hazy atmospheres the reduced transmittance and increased path radiance counter this effect and result in decreased accuracy with increasing view angle. The relationship between the Signal-to-Noise Ratio (SNR) and classification accuracy is seen to depend in a complex manner on spatial parameters and feature selection. Higher SNR values are not always seen to result in higher accuracies, and even in cases of low SNR, appropriately chosen feature sets can lead to high accuracies.
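
    The analytical model described above works directly on class statistics rather than on pixels. As a hedged illustration of that idea (not the report's equations), the sketch below pushes a class mean vector and covariance matrix through a linearized atmosphere/sensor model and scores pairwise separability with the Bhattacharyya distance, which bounds Gaussian classification error; all names are assumptions.

```python
import numpy as np

def propagate_class_stats(mean, cov, band_gain, band_offset, noise_cov):
    """Push a class's spectral mean vector and covariance matrix through a
    linearized per-band model y = G x + b + n, where G is diagonal with
    per-band gains (e.g., transmittance times radiometric gain), b collects
    additive terms such as path radiance, and n ~ N(0, noise_cov)."""
    G = np.diag(band_gain)
    new_mean = G @ mean + band_offset
    new_cov = G @ cov @ G.T + noise_cov
    return new_mean, new_cov

def bhattacharyya_distance(m1, c1, m2, c2):
    """Bhattacharyya distance between two Gaussian classes; a larger distance
    means better separability, with pairwise error bounded by 0.5 * exp(-B)
    for equal priors."""
    c = 0.5 * (c1 + c2)
    dm = m1 - m2
    mean_term = 0.125 * dm @ np.linalg.solve(c, dm)
    _, logdet_c = np.linalg.slogdet(c)
    _, logdet_c1 = np.linalg.slogdet(c1)
    _, logdet_c2 = np.linalg.slogdet(c2)
    cov_term = 0.5 * (logdet_c - 0.5 * (logdet_c1 + logdet_c2))
    return mean_term + cov_term
```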

    Exploring Limits in Hyperspectral Unresolved Object Detection

    Hyperspectral imaging systems have been shown to enable unresolved object detection through enhanced spectral characteristics of the data. Robust detection performance prediction tools are desirable for many reasons, including optimal system design and operation. The research described in this paper explores the general understanding of system factors that limit detection performance. Examples are shown for detectability limits due to target subpixel fill fraction, sensor noise, and scene complexity.
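
    The detectability limits named above (fill fraction, sensor noise, scene clutter) are commonly studied with a linear subpixel mixing model and a spectral matched filter. The sketch below shows those standard constructs; the specific models and parameters used in the paper are not reproduced here.

```python
import numpy as np

def subpixel_mixture(target, background, fill_fraction, noise_sigma, rng=None):
    """Linear mixing model for an unresolved target: the pixel spectrum is a
    fill-fraction-weighted blend of target and background plus sensor noise."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(0.0, noise_sigma, size=np.shape(target))
    return fill_fraction * target + (1.0 - fill_fraction) * background + noise

def matched_filter_score(pixel, target, background_mean, background_cov):
    """Spectral matched filter normalized so the score roughly estimates the
    target fill fraction; higher values indicate stronger target evidence
    against the background clutter statistics."""
    d = target - background_mean
    w = np.linalg.solve(background_cov, d)
    return w @ (pixel - background_mean) / (d @ w)
```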

    A Comparative Evaluation of Spectral Quality Metrics for Hyperspectral Imagery

    Quantitative methods to assess or predict the quality of a spectral image are the subject of a number of current research activities. An accepted methodology would be highly desirable for data collection tasking or data archive searches, in a way analogous to the current uses of the National Imagery Interpretability Rating Scale (NIIRS) General Image Quality Equation (GIQE). A number of approaches to estimating the quality of a spectral image have been published. An issue with many of these approaches is that they tend to be constructed around specific tasks (target detection, background classification, etc.). While this has often been necessary to make the quality assessment tractable, it is desirable to have a method that is more general. One such general approach is presented in a companion paper (Simmons et al.). This new approach seeks to get at the heart of the general spectral imagery quality analysis problem: assessing the confidence of an image analyst in performing a specified task with a specific spectral image. In this approach, the quality contributions from the spatial and spectral aspects of the imagery are treated separately, and then a fusion concept known as “semantic transformation” is used to combine the utility, or confidence, from these two aspects into an overall quality metric. This paper compares and contrasts the various methods published in the literature with this new General Spectral Utility Metric (GSUM). In particular, the methods are applied to a target detection problem using data from the airborne HYDICE instrument collected at Forest Radiance I. While the GSUM approach is seen to lead to intuitively pleasing results, its sensitivity to image parameters was not consistent with previously published approaches. However, this likely resulted more from limitations of the previous approaches than from problems with GSUM. Further studies with additional spectral imaging applications are recommended, along with efforts to integrate a performance prediction capability into the GSUM framework.
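
    The abstract does not state the form of the “semantic transformation” that fuses the spatial and spectral assessments, so the snippet below is only a hypothetical placeholder showing the structure of such a fusion: two task confidences in [0, 1] combined into a single utility. The combination rule shown is an assumption, not the GSUM definition.

```python
def combined_utility(spatial_confidence, spectral_confidence):
    """Hypothetical fusion of spatial and spectral task confidences, each in
    [0, 1], into one overall utility score. Treating the two assessments as
    independent evidence gives this simple noisy-OR rule; the actual GSUM
    semantic transformation may differ."""
    return 1.0 - (1.0 - spatial_confidence) * (1.0 - spectral_confidence)
```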

    Generation of a Combined Dataset of Simulated Radar and EO/IR Imagery

    In the world of remote sensing, both radar and EO/IR (electro-optical/infrared) sensors carry with them unique information useful to the imaging community. Radar has the capability of imaging through all types of weather, day or night. EO/IR produces radiance maps and frequently images at much finer resolution than radar. While each of these systems is valuable to imaging, the value added by combining the best of both worlds remains largely unexplored territory in the imaging community. This work begins to explore the challenges in simulating a scene in both a radar tool called Xpatch and an EO/IR tool called DIRSIG (Digital Imaging and Remote Sensing Image Generation). The capabilities and limitations inherent to both radar and EO/IR are similar in the image simulation tools, so the work done in a simulated environment will carry over to the real-world environment as well. The goal of this effort is to demonstrate an environment in which EO/IR and radar images of common scenes can be simulated. Once demonstrated, this environment would be used to facilitate trade studies of various multi-sensor instrument design and exploitation algorithm concepts. The synthetic data generated will be compared to existing measured data to demonstrate the validity of the experiment.

    Semi-Automated DIRSIG Scene Modeling from 3D LIDAR and Passive Imaging Sources

    The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is an established, first-principles-based scene simulation tool that produces synthetic multispectral and hyperspectral images from the visible to the long-wave infrared (0.4 to 20 microns). Over the last few years, significant enhancements such as spectral polarimetric and active Light Detection and Ranging (LIDAR) models have also been incorporated into the software, providing an extremely powerful tool for algorithm testing and sensor evaluation. However, the extensive time required to create large-scale scenes has limited DIRSIG’s ability to generate scenes “on demand.” To date, scene generation has been a laborious, time-intensive process, as the terrain model, CAD objects, and background maps have to be created and attributed manually. To shorten the time required for this process, we are initiating a research effort that aims to reduce the man-in-the-loop requirements for several aspects of synthetic hyperspectral scene construction. Through a fusion of 3D LIDAR data with passive imagery, we are working to semi-automate several of the required tasks in the DIRSIG scene creation process. Additionally, many of the remaining tasks will also realize a shortened implementation time through this application of multi-modal imagery. This paper reports on the progress made thus far in achieving these objectives.
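
    One of the manual tasks named above is construction of the terrain model. As an illustration of the kind of step LIDAR data can automate (not the authors' pipeline), the sketch below grids a point cloud into a coarse ground-height map by keeping the lowest return per cell; the function name and minimum-return heuristic are assumptions.

```python
import numpy as np

def lidar_to_terrain_grid(points_xyz, cell_size):
    """Grid an (N, 3) array of LIDAR returns into a terrain height map by
    keeping the lowest return in each cell, a crude ground-surface estimate
    (vegetation and building returns sit above the kept minimum)."""
    xy_min = points_xyz[:, :2].min(axis=0)
    cols, rows = np.floor((points_xyz[:, :2] - xy_min) / cell_size).astype(int).T
    grid = np.full((rows.max() + 1, cols.max() + 1), np.nan)
    for r, c, z in zip(rows, cols, points_xyz[:, 2]):
        if np.isnan(grid[r, c]) or z < grid[r, c]:
            grid[r, c] = z
    return grid
```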